19 research outputs found

    Contributions for improving debugging of kernel-level services in a monolithic operating system

    Despite the existence of an overwhelming amount of research on the quality of system software, operating systems are still plagued with reliability issues mainly caused by defects in kernel-level services such as device drivers and file systems. Studies have shown that each release of the Linux kernel contains between 600 and 700 faults, and that the propensity of device drivers to contain errors is up to seven times higher than that of any other part of the kernel. These numbers suggest that kernel-level service code is not sufficiently tested and that many faults remain unnoticed or are hard to fix by non-expert programmers, who account for the majority of service developers. This thesis proposes a new approach to the debugging and testing of kernel-level services, focused on the interaction between the services and the core kernel. The approach tackles the issue of safety holes in the implementation of kernel API functions. For Linux, we have instantiated an automated approach, named Diagnosys, which relies on static analysis of kernel code to identify, categorize, and expose the safety holes of API functions that can turn into runtime faults when the functions are used in service code by developers with limited knowledge of the intricacies of kernel code. To illustrate our approach, we have implemented Diagnosys for Linux 2.6.32 and shown its benefits in supporting developers in their testing and debugging tasks.
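    To make the notion of an API "safety hole" concrete, here is a minimal, hypothetical sketch of the kind of check such an analysis performs (the function names, regexes, and heuristic below are illustrative only, not the actual Diagnosys implementation, which works on real kernel code):

```python
import re

# Hypothetical sketch: flag C functions that dereference a pointer
# parameter without a preceding NULL check -- one category of "safety
# hole" that can crash a service calling the function with NULL.
FUNC_RE = re.compile(
    r"(\w+)\s*\(\s*struct\s+\w+\s*\*\s*(\w+)\s*\)\s*\{(.*?)\}", re.S
)

def find_null_deref_holes(source: str):
    holes = []
    for name, param, body in FUNC_RE.findall(source):
        derefs = (param + "->") in body
        checked = re.search(rf"if\s*\(\s*!?\s*{param}\b", body)
        if derefs and not checked:
            holes.append(name)
    return holes

sample = """
int sample_get_id(struct device *dev) { return dev->id; }
int safe_get_id(struct device *dev) { if (!dev) return -1; return dev->id; }
"""
print(find_null_deref_holes(sample))  # -> ['sample_get_id']
```

    A real tool would of course use a proper C parser and inter-procedural analysis rather than regexes; the sketch only illustrates the category of defect being exposed.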

    An Empirical Assessment of Bellon's Clone Benchmark

    Context: Clone benchmarks are essential to the assessment and improvement of clone detection tools and algorithms. Among existing benchmarks, Bellon's benchmark is widely used by the research community. However, a serious threat to the validity of this benchmark is that the reference clones it contains were manually validated by Bellon alone; other people may disagree with Bellon's judgment. Objective: In this paper, we perform an empirical assessment of Bellon's benchmark. Method: We seek the opinion of eighteen participants on a subset of Bellon's benchmark to determine whether researchers should trust the reference clones it contains. Results: Our experiment shows that a significant number of the reference clones are debatable, and this phenomenon can introduce noise in results obtained using this benchmark.
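    One simple way to operationalize "debatable" from multiple judges' votes is a consensus threshold; the sketch below is purely illustrative (the threshold and data are invented, not the paper's protocol):

```python
# Illustrative sketch: a reference clone is "debatable" when neither a
# clear majority for nor against reaches the consensus threshold.
def debatable_clones(votes, threshold=0.8):
    """votes: {clone_id: list of True/False judgments from the raters}."""
    out = []
    for clone_id, judgments in votes.items():
        agreement = sum(judgments) / len(judgments)
        if max(agreement, 1 - agreement) < threshold:
            out.append(clone_id)
    return out

votes = {
    "c1": [True] * 17 + [False],      # near-unanimous: settled
    "c2": [True] * 10 + [False] * 8,  # split opinion: debatable
}
print(debatable_clones(votes))  # -> ['c2']
```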

    Large scale interoperability in the context of Future Internet

    The growth of the Internet as a large-scale media provisioning platform has been a great success story of the 21st century. However, multimedia applications, with their specific traffic characteristics and novel service requirements, pose an interesting challenge in terms of discovery, mobility, and management. Furthermore, the recent impetus of the Internet of Things has made it necessary to revitalize research in order to integrate heterogeneous information sources across networks. Towards this objective, the contributions in this thesis try to find a balance between heterogeneity and interoperability, to discover and integrate heterogeneous information sources in the context of the Future Internet.
    Discovering information sources across networks requires a thorough understanding of how the information is structured and what specific methods the sources use to communicate. This process has been regulated with the help of discovery protocols. However, protocols rely on different techniques and are designed with the underlying network infrastructure in mind, thus limiting the capability of some protocols to cross network boundaries. To address this issue, the first contribution in this thesis provides a balanced solution enabling discovery protocols to interoperate with each other, as well as the means necessary to cross network boundaries. Towards this objective, we propose ZigZag, a middleware to reuse and extend current discovery protocols, designed for local networks, in order to discover available services in the large. Our approach is based on protocol translation, enabling service discovery irrespective of the underlying discovery protocol.
    Although our approach is a step forward towards interoperability in the large, we needed to make sure that discovery messages do not create a bottleneck: in a large-scale consumer-oriented network, service discovery messages could render the network unusable. To counter this, ZigZag uses the concept of aggregation during the discovery process. Using aggregation, ZigZag is able to integrate several replies from different service sources supporting different discovery protocols. However, customizing the aggregation process to one's needs requires a thorough understanding of ZigZag's fundamentals. To this end, we propose our second contribution: a flexible policy language that helps define policies in a clean and effective way. The policy language also has added advantages in terms of dynamic management, providing features such as delegation, runtime policy management, and logging. We tested our approach with the help of simulations; the results showed that ZigZag can both reduce the number of messages that flow through the network and provide value-sensitive information to the requesting entity. Although ZigZag is designed to discover media services in the large, it can equally be used in other domains such as home automation and smart spaces, and the flexible, pluggable, modular design of the policy language enables it to be used in other applications such as e-mail.
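    The protocol-translation-plus-aggregation idea can be sketched as follows; the field names and the two mock protocols are illustrative assumptions, not ZigZag's actual message formats:

```python
# Hedged sketch of ZigZag-style discovery: translate replies from
# heterogeneous discovery protocols into a common record, then
# aggregate by de-duplicating equivalent services.
def from_ssdp(reply):
    return {"name": reply["ST"], "addr": reply["LOCATION"]}

def from_mdns(reply):
    return {"name": reply["service"], "addr": reply["target"]}

def aggregate(replies):
    """replies: list of (protocol, raw_reply) pairs."""
    translators = {"ssdp": from_ssdp, "mdns": from_mdns}
    seen = {}
    for proto, reply in replies:
        record = translators[proto](reply)
        seen.setdefault(record["name"], record)  # keep first reply per service
    return list(seen.values())

replies = [
    ("ssdp", {"ST": "media-renderer", "LOCATION": "10.0.0.5"}),
    ("mdns", {"service": "media-renderer", "target": "10.0.0.5"}),
    ("mdns", {"service": "printer", "target": "10.0.0.9"}),
]
print([r["name"] for r in aggregate(replies)])  # -> ['media-renderer', 'printer']
```

    Collapsing the two replies for the same renderer into one record is exactly the message-reduction effect the simulations measure.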

    Got Issues? Who Cares About It? A Large Scale Investigation of Issue Trackers from GitHub

    Feedback from software users constitutes a vital part of the evolution of software projects. By filing issue reports, users help identify and fix bugs, document software code, and enhance the software via feature requests. Many studies have explored issue reports, proposed approaches to enable the submission of higher-quality reports, and presented techniques to sort, categorize, and leverage issues for software engineering needs. Who, however, cares about filing issues? What kinds of issues are reported in issue trackers? What correlations exist between issue reporting and the success of software projects? In this study, we address the need to answer such questions by performing an empirical study on a hundred thousand open-source projects; after filtering for relevant trackers, the study retained about 20,000 projects. We investigate and answer various research questions on the popularity and impact of issue trackers.
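    Studies of this kind often start from a coarse keyword-based sorting of issues; the categories and keywords below are invented for illustration and are not the classifier used in the study:

```python
# Naive keyword-based sorting of issue titles into coarse categories --
# an illustrative baseline, not the study's actual method.
CATEGORIES = {
    "bug": ("crash", "error", "fail", "broken"),
    "feature": ("add", "support", "request"),
    "docs": ("doc", "readme", "typo"),
}

def categorize(title):
    lowered = title.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in lowered for k in keywords):
            return category
    return "other"

print(categorize("Crash when opening large files"))  # -> 'bug'
print(categorize("Please add dark mode"))            # -> 'feature'
```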

    Emergent Overlays for Adaptive MANET Broadcast

    Mobile Ad-Hoc Networks (MANETs) allow distributed applications where no fixed network infrastructure is available. MANETs use wireless communication subject to faults and uncertainty, and must support efficient broadcast. Controlled flooding is suitable for highly-dynamic networks, while overlay-based broadcast is suitable for dense and more static ones. Density and mobility vary significantly over a MANET deployment area. We present the design and implementation of emergent overlays for efficient and reliable broadcast in heterogeneous MANETs. This adaptation technique allows nodes to automatically switch from controlled flooding to the use of an overlay. Interoperability protocols support the integration of both protocols in a single heterogeneous system. Coordinated adaptation policies allow regions of nodes to autonomously and collectively emerge and dissolve overlays. Our simulation of the full network stack of 600 mobile nodes shows that emergent overlays reduce energy consumption, and improve reliability and coverage compared to single protocols and to two previously-proposed adaptation techniques.
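    The density/mobility trade-off behind the switch can be sketched as a per-node decision rule; the thresholds here are invented for illustration, not the paper's calibrated policy:

```python
# Illustrative per-node adaptation rule: dense, fairly static
# neighborhoods favor an overlay; sparse or highly mobile ones fall
# back to controlled flooding. Thresholds are made-up assumptions.
def choose_broadcast_protocol(neighbor_count, link_changes_per_min):
    if neighbor_count >= 8 and link_changes_per_min <= 2:
        return "overlay"
    return "controlled-flooding"

print(choose_broadcast_protocol(12, 1))  # -> 'overlay'
print(choose_broadcast_protocol(3, 6))   # -> 'controlled-flooding'
```

    In the actual system this decision is coordinated across a region of nodes rather than taken in isolation, so that an overlay emerges or dissolves collectively.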

    Empirical evaluation of bug linking

    To collect software bugs found by users, development teams often set up bug trackers using systems such as Bugzilla. Developers then fix some of the bugs and commit the corresponding code changes into version control systems such as svn or git. Unfortunately, the links between bug reports and code changes are missing for many software projects, as the bug tracking and version control systems are often maintained separately. Yet linking bug reports to fix commits is important, as it can shed light on the nature of bug-fixing processes and expose patterns in software management. Bug linking solutions, such as ReLink, have been proposed. The demonstration of their effectiveness, however, faces a number of issues, including the reliability of their ground-truth datasets and the extent of their measurements. We propose in this study a benchmark for evaluating bug linking solutions. This benchmark includes a dataset of about 12,000 bug links from 10 programs; these true links between bug reports and their fixes were recorded during the bug-fixing process itself. We designed a number of research questions to assess both quantitatively and qualitatively the effectiveness of a bug linking tool. Finally, we apply this benchmark to ReLink and report the strengths and limitations of this bug linking tool.
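    A common baseline that such benchmarks compare against (and that ReLink improves upon) is to match explicit bug identifiers in commit messages; the sketch below is that baseline, with invented data, not ReLink's algorithm:

```python
import re

# Baseline linking heuristic: find bug identifiers such as "bug 101"
# or "fixes #202" in commit messages and keep those that match a
# known bug report. Data below is invented for illustration.
BUG_ID_RE = re.compile(r"(?:bug\s*#?|fixes\s*#|#)(\d+)", re.I)

def link_commits(commits, known_bug_ids):
    links = []
    for sha, message in commits:
        for bug_id in BUG_ID_RE.findall(message):
            if int(bug_id) in known_bug_ids:
                links.append((sha, int(bug_id)))
    return links

commits = [
    ("a1b2", "Fix NPE in parser, bug 101"),
    ("c3d4", "Refactor build scripts"),
    ("e5f6", "fixes #202: wrong exit code"),
]
print(link_commits(commits, {101, 202}))  # -> [('a1b2', 101), ('e5f6', 202)]
```

    Such a heuristic silently misses fixes whose messages never mention the bug, which is precisely the kind of gap a ground-truth benchmark makes measurable.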

    Approche langage au développement de pilotes de périphériques robustes

    Although device drivers are critical components, their development process has remained rudimentary despite the high level of expertise it requires. A recent study showed that their propensity to contain bugs is up to seven times higher than that of the other components of an operating system. This thesis proposes a new approach to device driver development based on domain-specific languages. We illustrate our approach by introducing a language dedicated to the specification of device programming interfaces, named Devil. Processing a Devil specification begins with an analysis that detects potential inconsistencies. The code needed to implement the communication between the device and the driver is then generated automatically. This code comes in two forms, depending on whether one wishes to favor the checks performed on the driver or its runtime performance.

    A DSL Approach to Improve Productivity and Safety in Device Drivers Development

    Although peripheral devices come out at a frantic pace and require fast releases of drivers, little progress has been made to improve the development of drivers. Too often, this development consists of decoding hardware intricacies based on inaccurate documentation. Then, assembly-level operations need to be used to interact with the device. These low-level operations reduce the readability of the driver and prevent safety properties from being checked.
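    The DSL idea can be sketched as follows: describe a device register declaratively and generate a checked accessor instead of hand-written bit manipulation. The mini-spec format and generated code below are invented for illustration and are not Devil's actual syntax:

```python
# Invented mini-spec in the spirit of a device-interface DSL (NOT
# Devil's actual syntax): a register with named bit-fields, from which
# a C setter with a generated range check is produced.
SPEC = {
    "register": "ctrl",
    "offset": 0x04,
    "fields": {"enable": (0, 1), "speed": (1, 3)},  # name: (shift, width)
}

def gen_setter(spec, field):
    shift, width = spec["fields"][field]
    mask = (1 << width) - 1
    off = spec["offset"]
    return (
        f"static inline void set_{spec['register']}_{field}(u32 *base, u32 v)\n"
        f"{{\n"
        f"    BUG_ON(v > {mask:#x});  /* generated range check */\n"
        f"    base[{off:#x} / 4] = (base[{off:#x} / 4]"
        f" & ~({mask:#x}u << {shift})) | (v << {shift});\n"
        f"}}\n"
    )

print(gen_setter(SPEC, "speed"))
```

    Because the shift, width, and range check are all derived from one declarative source, the generator cannot produce the mismatched-mask bugs that hand-written drivers routinely contain, which is the safety argument behind the DSL approach.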